Background and Purpose: Colorectal cancer is a common and fatal malignancy, the fourth most common cancer in men and the third most common cancer in women worldwide. Timely detection of the cancer in its early stages is essential for treating the disease. At present, there is a lack of datasets for histopathological image segmentation of rectal cancer, which often hampers assessment accuracy when computer technology is used to aid diagnosis. Methods: This study provides a new publicly available Enteroscope Biopsy Histopathological Hematoxylin and Eosin Image Dataset for Image Segmentation Tasks (EBHI-Seg). To demonstrate the validity and extensiveness of EBHI-Seg, experimental results on EBHI-Seg are evaluated using classical machine learning methods and deep learning methods. Results: The experiments showed that deep learning methods achieve better image segmentation performance on EBHI-Seg. The maximum Dice score for the classical machine learning methods is 0.948, while that for the deep learning methods is 0.965. Conclusion: This publicly available dataset contains 5,170 images covering six tumor differentiation stages, together with the corresponding ground truth images. The dataset can help researchers develop new segmentation algorithms for the medical diagnosis of colorectal cancer, which can be used in clinical settings to help doctors and patients.
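For reference, the Dice score reported above can be computed from binary segmentation masks as in the minimal sketch below (plain NumPy; this is the standard metric definition, not code from the EBHI-Seg release).

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```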
Anomaly Detection (AD), as a critical problem, has been widely discussed. In this paper, we focus on one specific problem, Visual Defect Detection (VDD), which arises in many industrial applications. In practice, defect image samples are very rare and difficult to collect, so we concentrate on unsupervised visual defect detection and localization and propose a novel framework based on recent score-based generative models, which synthesize realistic images by iterative denoising through stochastic differential equations (SDEs). Our work is inspired by the observation that, with noise injected into the original image, defects may be turned into normal cases during the denoising (i.e., reconstruction) process. First, based on the assumption that anomalous data lie in the low-probability-density region of the normal data distribution, we explain a common phenomenon that occurs when reconstruction-based approaches are applied to VDD: normal pixels also change during reconstruction. Second, because normal pixels differ between the reconstructed and original images, a time-dependent gradient (i.e., score) of the normal data distribution is used as the metric, rather than the reconstruction loss, to gauge defects. Third, a novel $T$ scales approach is developed to dramatically reduce the required number of iterations, accelerating inference. These practices allow our model to address VDD in an unsupervised manner while maintaining reasonably good performance. We evaluate our method on several datasets to demonstrate its effectiveness.
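As a rough illustration of the second point, scoring defects by the magnitude of the estimated score rather than by reconstruction loss, the sketch below assumes a pretrained time-conditional score network with the interface `score_net(x, t)`; the interface, noise level, and aggregation are illustrative assumptions rather than the paper's implementation.

```python
import torch

def anomaly_map(score_net: torch.nn.Module, image: torch.Tensor, t: float = 0.1) -> torch.Tensor:
    """Per-pixel anomaly map from the norm of the estimated score.

    Assumes `score_net(x, t)` approximates grad_x log p_t(x) for the
    noise-perturbed normal-data distribution, as in score-based SDE models.
    Pixels lying in low-density regions of that distribution receive
    large score magnitudes.
    """
    x = image.unsqueeze(0)                        # add batch dimension
    t_tensor = torch.full((1,), t, device=x.device)
    with torch.no_grad():
        score = score_net(x, t_tensor)            # same shape as x
    return score.squeeze(0).norm(dim=0)           # aggregate over channels
```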
Recent years have witnessed rapid development in NeRF-based image rendering due to its high quality. However, point cloud rendering remains comparatively less explored. Compared with NeRF-based rendering, which suffers from dense spatial sampling, point cloud rendering is naturally less computation-intensive, which enables its deployment on mobile computing devices. In this work, we focus on boosting the image quality of point cloud rendering with a compact model design. We first analyze how the volume rendering formulation adapts to point clouds. Based on the analysis, we simplify the NeRF representation to a spatial mapping function that requires only a single evaluation per pixel. Further, motivated by ray marching, we rectify the noisy raw point clouds to the estimated intersections between rays and surfaces and use these as the queried coordinates, which avoids \textit{spatial frequency collapse} and neighbor-point disturbance. Composed of rasterization, spatial mapping, and refinement stages, our method achieves state-of-the-art performance on point cloud rendering, outperforming prior works by notable margins with a smaller model size. We obtain a PSNR of 31.74 on NeRF-Synthetic, 25.88 on ScanNet, and 30.81 on DTU. Code and data are publicly available at https://github.com/seanywang0408/RadianceMapping.
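To make the "single evaluation per pixel" idea concrete, the toy sketch below maps a rectified surface point and its viewing direction to an RGB value with one MLP call, in contrast to NeRF's many samples along each ray; the layer sizes and inputs are illustrative assumptions, and the repository linked above contains the actual architecture.

```python
import torch
import torch.nn as nn

class SpatialMappingHead(nn.Module):
    """Toy per-pixel spatial mapping: one MLP evaluation per pixel."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, surface_points: torch.Tensor, view_dirs: torch.Tensor) -> torch.Tensor:
        # surface_points, view_dirs: (num_pixels, 3) -> rgb: (num_pixels, 3)
        return self.mlp(torch.cat([surface_points, view_dirs], dim=-1))
```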
With the advent of the big data era, data quality issues have become increasingly important. Among the many contributing factors, data with missing values is a major problem, so developing effective imputation models is a key topic for the research community. Recently, a major research direction has been to adopt neural network models, such as self-organizing maps or autoencoders, to fill in missing values. However, these classical methods can hardly discover correlated features and common features among data attributes at the same time. In particular, this is a very typical problem for classical autoencoders, which often learn invalid constant mappings that greatly hurt imputation performance. To solve the above problems, we propose and develop a missing-value imputation model based on a feature-fusion enhanced autoencoder. We first design and integrate into the autoencoder a hidden layer consisting of dropout neurons and radial basis function neurons, which enhances the ability to learn correlated features and common features. In addition, we formulate a missing value filling strategy based on dynamic clustering (MVDC), which is incorporated into an iterative optimization process. This design strengthens the multi-dimensional feature fusion capability and thus improves the dynamic collaborative missing-value filling performance. The effectiveness of our model is verified by experimental comparisons with many missing-value filling methods, tested on seven datasets with different missing rates.
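The sketch below illustrates, under loose assumptions, how a hidden layer fusing dropout neurons and radial basis function neurons could be combined with a masked reconstruction loss for imputation; the layer sizes, the fusion by concatenation, and the omitted MVDC clustering step are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn as nn

class RBFLayer(nn.Module):
    """Radial-basis-function units: activation depends on distance to learned centers."""
    def __init__(self, in_dim: int, num_centers: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_centers, in_dim))
        self.log_gamma = nn.Parameter(torch.zeros(num_centers))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        dist2 = torch.cdist(x, self.centers).pow(2)          # (batch, num_centers)
        return torch.exp(-self.log_gamma.exp() * dist2)

class FusionAutoencoder(nn.Module):
    """Autoencoder whose hidden representation fuses dropout units and RBF units."""
    def __init__(self, num_features: int, hidden: int = 32, num_centers: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(num_features, hidden), nn.ReLU(), nn.Dropout(p=0.2))
        self.rbf = RBFLayer(hidden, num_centers)
        self.decoder = nn.Linear(hidden + num_centers, num_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.encoder(x)
        return self.decoder(torch.cat([h, self.rbf(h)], dim=-1))

def masked_mse(recon: torch.Tensor, x: torch.Tensor, observed_mask: torch.Tensor) -> torch.Tensor:
    """Train only on observed entries; imputed values are read from the reconstruction."""
    return ((recon - x) ** 2 * observed_mask).sum() / observed_mask.sum()
```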
Non-contact particle manipulation (NPM) technology has greatly extended human analytical capabilities to the micro and nano scales, which in turn has substantially advanced materials science and the life sciences. Although great success has been achieved with electric, magnetic, and optical fields from the robotics perspective, it remains a labor-intensive operation, since professional human assistance is in some way mandatory during the early preparation stage. Automated non-contact trapping of moving particles is therefore worthwhile, especially for applications where particle samples are rare, fragile, or contact-sensitive. Taking advantage of the latest dynamic acoustic field modulation techniques, and especially the great scalability of acoustic manipulation from the micro scale to the sub-centimeter scale, we propose in this paper an automated non-contact trapping of moving micro-particles based on an ultrasonic array system and microscopic vision. To the best of our knowledge, the main contribution of this work is that fully automated micro-particle trapping in the acoustic NPM field is achieved for the first time by resorting to robotic approaches. In short, the moving state of a particle is observed and predicted by a binocular microscopic vision system, with reference to the computed and generated acoustic trap region. The hand-eye relationship problem of the non-rigidly connected robot end-effector is also solved in this work. Experiments demonstrate the effectiveness of this work.
Utility poles and building edges are frequently observable objects on urban roads, providing reliable cues for various computer vision tasks. To repeatably extract them as features and register discrete LiDAR frames against each other, we propose the first learning-based feature segmentation and description model for 3D lines in LiDAR point clouds. To train our model without a time-consuming and tedious data labeling process, we first generate synthetic primitives of the basic appearance of target lines and build an iterative line auto-labeling process to gradually refine the line labels on real LiDAR scans. Our segmentation model can extract lines under perturbations of arbitrary scale, and we jointly train the segmentation and descriptor heads with a shared EdgeConv encoder layer. On top of the model, we build a highly usable global registration module for point cloud registration that requires no initial transformation hint. Experiments show that our line-based registration method is highly competitive with state-of-the-art methods. Our code is available at https://github.com/zxrzju/SuperLine3D.git.
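As background for the line-based global registration step, the sketch below shows the standard Kabsch least-squares rigid alignment from matched 3D correspondences (for example, points sampled from corresponding line features); it is a generic building block, not the paper's registration algorithm.

```python
import numpy as np

def rigid_transform_from_matches(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (R, t) mapping src -> dst via the Kabsch algorithm.

    src, dst: (N, 3) matched 3D points, e.g. sampled from corresponding line features.
    """
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```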
This paper proposes a novel Unified Feature Optimization (UFO) paradigm for training and deploying deep models in real-world, large-scale scenarios that require a collection of multiple AI functions. UFO aims to benefit every task through large-scale pretraining on all tasks. Compared with the well-known foundation models, UFO has two different points of emphasis, namely a relatively smaller model size and no adaptation cost: 1) UFO squeezes a wide range of tasks into a moderate-sized unified model in a multi-task learning manner and further trims the model size when transferring to downstream tasks. 2) UFO does not emphasize transfer to novel tasks. Instead, it aims to make the trimmed model dedicated to one or more already-seen tasks. With these two characteristics, UFO provides great convenience for flexible deployment while maintaining the benefits of large-scale pretraining. A key merit of UFO is that the trimming process not only reduces the model size and inference cost but can even improve accuracy on certain tasks. Specifically, UFO considers multi-task training, which has a two-fold impact on the unified model: some closely related tasks benefit each other, while other tasks conflict with one another. UFO manages to reduce the conflicts and preserve the mutual benefits through a novel network architecture search (NAS) method. Experiments on a wide range of deep representation learning tasks (i.e., face recognition, person re-identification, vehicle re-identification, and product retrieval) show that the model trimmed from UFO achieves higher accuracy than its single-task-trained counterpart while having a smaller model size, validating the concept of UFO. In addition, UFO also supported the release of a 17-billion-parameter computer vision (CV) foundation model, the largest CV model in the industry.
In this work we propose NARRATE, a novel pipeline that enables simultaneous editing of portrait lighting and viewpoint in a photorealistic manner. As a hybrid neural-physical face model, NARRATE leverages the complementary benefits of geometry-aware generative approaches and normal-assisted physical face models. In a nutshell, NARRATE first inverts the input portrait into a coarse geometry and employs neural rendering to produce images that resemble the input and exhibit convincing pose changes. However, the inversion step introduces mismatches, yielding lower-quality images with fewer facial details. We therefore further estimate portrait normals to enhance the coarse geometry, creating a high-fidelity physical face model. In particular, we fuse the neural and physical renderings to compensate for the imperfect inversion, producing realistic and view-consistent novel-perspective images. In the relighting stage, previous works focus on single-view portrait relighting and neglect consistency across different viewpoints, leading to unstable and inconsistent lighting effects under view changes. We address this problem by unifying the multi-view input normal maps with the physical face model. NARRATE performs relighting with consistent normal maps, imposing cross-view constraints and exhibiting stable and coherent illumination effects. We experimentally demonstrate that NARRATE achieves more realistic and reliable results than prior works. We further bridge NARRATE with animation and style transfer tools, supporting pose change, light change, facial animation, and style transfer, either separately or in combination, all at photographic quality. We showcase vivid free-view facial animations as well as 3D-aware relightable stylization, which can help facilitate various AR/VR applications such as virtual photography, 3D video conferencing, and post-production.
Non-negative matrix factorization (NMF) has been widely used for dimensionality reduction in machine learning. However, traditional NMF cannot handle outliers properly and is therefore sensitive to noise. To improve the robustness of NMF, this paper proposes an adaptively weighted NMF that introduces weights to emphasize the different importance of each data point, thereby reducing the algorithm's sensitivity to noisy data. It differs substantially from existing robust NMF variants that rely on slowly growing similarity measures. Specifically, two strategies are proposed to achieve this: a fuzzy weighting technique and an entropy weighting technique, both of which lead to iterative solutions with a simple form. Experimental results show that the new methods produce more robust feature representations than existing methods on several real datasets containing noise.
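For orientation, the sketch below runs weighted NMF with fixed per-sample weights using the standard multiplicative updates; the adaptive fuzzy and entropy weight updates proposed in the paper are not reproduced here and would be interleaved with these factor updates.

```python
import numpy as np

def weighted_nmf(X, weights, rank, n_iter=200, eps=1e-9):
    """Weighted NMF via multiplicative updates.

    Minimizes sum_j weights[j] * ||X[:, j] - (W @ H)[:, j]||^2 over
    non-negative W (features x rank) and H (rank x samples).
    `weights` are treated as fixed; an adaptive scheme would re-estimate
    them (e.g. fuzzy or entropy weighting) between iterations.
    """
    m, n = X.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    S = np.broadcast_to(weights, (m, n))          # per-sample weight for each column
    for _ in range(n_iter):
        WH = W @ H
        H *= (W.T @ (S * X)) / (W.T @ (S * WH) + eps)
        WH = W @ H
        W *= ((S * X) @ H.T) / ((S * WH) @ H.T + eps)
    return W, H
```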
Weather forecasting is an attractive and challenging task, given its impact on human life and the complexity of atmospheric motion. Supported by massive historical observed time series data, the task is well suited to data-driven approaches, especially deep neural networks. Recently, graph neural network (GNN)-based methods have achieved excellent performance on spatio-temporal forecasting. However, canonical GNN-based methods model only the local graph of meteorological variables within each station or the global graph over whole stations, each in isolation, and thus lack information interaction between the meteorological variables of different stations. In this paper, we propose a novel Hierarchical Spatio-temporal Graph Neural Network (HiSTGNN) to model the cross-regional spatio-temporal correlations among the meteorological variables of multiple stations. An adaptive graph learning layer and spatial graph convolutions are used to construct a self-learning graph and to discover hidden dependencies among nodes of the variable-level and station-level graphs. To capture temporal patterns, a dilated inception module serving as the backbone of the gated temporal convolution is designed to model long and diverse meteorological trends. Furthermore, dynamic interactive learning is proposed to build bidirectional information passing in the hierarchical graph. Experimental results on three real-world meteorological datasets demonstrate that HiSTGNN outperforms 7 baselines and reduces errors by 4.2% to 11.6%, especially in comparison with state-of-the-art weather forecasting methods.
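To illustrate what an adaptive graph learning layer can look like, the sketch below derives a row-normalized adjacency matrix from trainable node embeddings, a common formulation for self-learning graphs in spatio-temporal GNNs; it is a generic example, not necessarily the exact layer used in HiSTGNN.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveGraphLearning(nn.Module):
    """Learns an adjacency matrix from trainable node embeddings,
    so no predefined graph structure is required."""
    def __init__(self, num_nodes: int, embed_dim: int = 16):
        super().__init__()
        self.src_emb = nn.Parameter(torch.randn(num_nodes, embed_dim))
        self.dst_emb = nn.Parameter(torch.randn(num_nodes, embed_dim))

    def forward(self) -> torch.Tensor:
        logits = F.relu(self.src_emb @ self.dst_emb.t())   # (N, N) non-negative scores
        return F.softmax(logits, dim=-1)                   # row-normalized adjacency

# One graph-convolution step over node features x of shape (batch, N, F):
#   adj = AdaptiveGraphLearning(num_nodes)()
#   out = torch.einsum('ij,bjf->bif', adj, x)
```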